Weibull maximum distribution (weibull_max)#
The Weibull maximum distribution (also called the reversed Weibull or Type III extreme value law) is a continuous distribution with a finite upper endpoint. It’s a natural model when outcomes can be arbitrarily small, but cannot exceed some maximum (a hard cap).
A very useful way to think about it:
If \(Y\ge 0\) is a standard Weibull (minimum) random variable, then \(X=\mu-Y\) has a Weibull-maximum distribution with upper endpoint \(\mu\).
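This reflection can be checked numerically (a quick sketch with arbitrary parameter values): since \(X=\mu-Y\), the `weibull_max` CDF at \(x\) should equal the `weibull_min` survival function at the distance \(\mu-x\).

```python
import numpy as np
from scipy import stats

# Reflection identity: if Y ~ weibull_min(c, scale=lam), then
# X = mu - Y ~ weibull_max(c, loc=mu, scale=lam).
# Equivalently F_X(x) = P(Y >= mu - x) = SF_Y(mu - x); check on a grid.
c, mu, lam = 1.8, 3.0, 1.2
x = np.linspace(mu - 6 * lam, mu, 200)

cdf_max = stats.weibull_max(c, loc=mu, scale=lam).cdf(x)
sf_min = stats.weibull_min(c, scale=lam).sf(mu - x)

assert np.allclose(cdf_max, sf_min)
```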
1) Title & classification#
| Item | Value |
|---|---|
| Name | Weibull maximum (`weibull_max`) |
| Type | Continuous |
| Support | \(x \in (-\infty,\mu]\) |
| Parameter space | shape \(c>0\), location \(\mu\in\mathbb{R}\), scale \(\lambda>0\) |
We’ll use SciPy’s convention: the shape is `c`, with `loc` \(=\mu\) and `scale` \(=\lambda\).
What you’ll be able to do after this notebook#
- write down the PDF/CDF/quantile in a clean parameterization
- interpret how \((c,\lambda,\mu)\) change the shape and support
- derive \(\mathbb{E}[X]\), \(\mathrm{Var}(X)\), and the log-likelihood
- sample from `weibull_max` using NumPy only (inverse transform)
- use `scipy.stats.weibull_max` for simulation and fitting
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import os
import plotly.io as pio
from scipy import stats
from scipy.special import gamma
pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
np.set_printoptions(precision=6, suppress=True)
rng = np.random.default_rng(42)
2) Intuition & motivation#
What it models#
The key structural feature is the finite upper endpoint \(\mu\):
\(X\) can take very negative values, but it cannot exceed \(\mu\).
Equivalently, the distance to the cap

\[ Y = \mu - X \ge 0 \]

has a standard Weibull (minimum) distribution with shape \(c\) and scale \(\lambda\).
Typical real-world use cases#
Extreme value modeling (block maxima) when the underlying phenomenon has a hard upper bound (material strengths, physical limits, bounded scores).
“Only-negative” noise models: \(\varepsilon\le 0\) with a tunable left tail, useful when observations are systematically below a theoretical maximum.
Time-to-failure style modeling via \(Y=\mu-X\) (a standard Weibull) while keeping the original variable \(X\) bounded above.
Relations to other distributions#
- Reflection/shift of `weibull_min`: if \(Y\sim\texttt{weibull\_min}(c,\,\text{scale}=\lambda)\) then \(X=\mu-Y\sim\texttt{weibull\_max}(c,\,\text{loc}=\mu,\,\text{scale}=\lambda)\).
- Generalized extreme value (GEV): the reversed Weibull is the Type III domain (finite upper endpoint). In GEV terms, it corresponds to a negative shape parameter \(\xi<0\).
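The GEV connection can be verified numerically. SciPy's `genextreme` uses a shape `c` with the sign flipped relative to \(\xi\) (so the Type III domain corresponds to a positive SciPy shape). The exact reparameterization below is a claimed identity worth checking rather than quoting:

```python
import numpy as np
from scipy import stats

# SciPy's genextreme uses shape c = -xi, so xi < 0 (Type III, finite upper
# endpoint) corresponds to a positive SciPy shape c > 0.
# Claimed mapping for the standard genextreme with c > 0:
#   genextreme(c)  ==  weibull_max(shape=1/c, loc=1/c, scale=1/c)
c_gev = 0.4
x = np.linspace(-5.0, 1.0 / c_gev, 300)  # support ends at 1/c

cdf_gev = stats.genextreme(c_gev).cdf(x)
cdf_wmax = stats.weibull_max(1.0 / c_gev, loc=1.0 / c_gev, scale=1.0 / c_gev).cdf(x)

assert np.allclose(cdf_gev, cdf_wmax)
```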
3) Formal definition#
We parameterize with shape \(c>0\), location (upper endpoint) \(\mu\in\mathbb{R}\), and scale \(\lambda>0\).
Define the nonnegative “distance to the endpoint”

\[ y(x) = \frac{\mu-x}{\lambda}, \qquad y(x)\ge 0 \iff x\le\mu. \]

CDF#

For \(x\le\mu\),

\[ F(x) = \exp\!\left(-\left(\frac{\mu-x}{\lambda}\right)^{\!c}\right), \]

and \(F(x)=1\) for \(x>\mu\).

PDF#

Differentiate the CDF (for \(x\le \mu\)):

\[ f(x) = \frac{c}{\lambda}\left(\frac{\mu-x}{\lambda}\right)^{\!c-1} \exp\!\left(-\left(\frac{\mu-x}{\lambda}\right)^{\!c}\right), \]

and \(f(x)=0\) for \(x>\mu\).

Quantile function (inverse CDF)#

For \(p\in(0,1)\),

\[ Q(p) = F^{-1}(p) = \mu - \lambda\,(-\ln p)^{1/c}. \]

This makes sampling straightforward (inverse transform).
def weibull_max_logpdf(x, c, loc=0.0, scale=1.0):
'''Log-PDF of weibull_max(c, loc, scale) with support (-inf, loc].'''
x = np.asarray(x, dtype=float)
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0 or scale <= 0:
raise ValueError("c must be > 0 and scale must be > 0")
y = (loc - x) / scale # y >= 0 on the support
out = np.full_like(x, -np.inf, dtype=float)
# Special-case c=1 to avoid 0 * log(0) at the endpoint.
if np.isclose(c, 1.0):
mask = y >= 0
out[mask] = -np.log(scale) - y[mask]
return out
mask_pos = y > 0
out[mask_pos] = np.log(c / scale) + (c - 1) * np.log(y[mask_pos]) - y[mask_pos] ** c
mask_zero = y == 0
if np.any(mask_zero):
out[mask_zero] = np.inf if c < 1 else -np.inf
return out
def weibull_max_pdf(x, c, loc=0.0, scale=1.0):
return np.exp(weibull_max_logpdf(x, c, loc=loc, scale=scale))
def weibull_max_cdf(x, c, loc=0.0, scale=1.0):
'''CDF of weibull_max(c, loc, scale).'''
x = np.asarray(x, dtype=float)
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0 or scale <= 0:
raise ValueError("c must be > 0 and scale must be > 0")
y = (loc - x) / scale
out = np.ones_like(x, dtype=float)
mask = y >= 0
out[mask] = np.exp(-(y[mask] ** c))
return out
def weibull_max_ppf(p, c, loc=0.0, scale=1.0):
'''Quantile function (inverse CDF).'''
p = np.asarray(p, dtype=float)
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0 or scale <= 0:
raise ValueError("c must be > 0 and scale must be > 0")
if np.any((p < 0) | (p > 1)):
raise ValueError("p must be in [0, 1]")
out = np.full_like(p, np.nan, dtype=float)
out[p == 0] = -np.inf
out[p == 1] = loc
mask = (p > 0) & (p < 1)
out[mask] = loc - scale * (-np.log(p[mask])) ** (1.0 / c)
return out
def weibull_max_entropy(c, scale=1.0):
'''Differential entropy H(X) (independent of loc).'''
c = float(c)
scale = float(scale)
if c <= 0 or scale <= 0:
raise ValueError("c must be > 0 and scale must be > 0")
return 1.0 + np.log(scale / c) + np.euler_gamma * (1.0 - 1.0 / c)
def weibull_max_moments(c, loc=0.0, scale=1.0):
'''Return (mean, variance, skewness, excess_kurtosis).'''
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0 or scale <= 0:
raise ValueError("c must be > 0 and scale must be > 0")
g1 = gamma(1.0 + 1.0 / c)
g2 = gamma(1.0 + 2.0 / c)
g3 = gamma(1.0 + 3.0 / c)
g4 = gamma(1.0 + 4.0 / c)
mean = loc - scale * g1
var = scale**2 * (g2 - g1**2)
mu3 = g3 - 3 * g2 * g1 + 2 * g1**3
skew = -mu3 / (g2 - g1**2) ** 1.5
mu4 = g4 - 4 * g3 * g1 + 6 * g2 * g1**2 - 3 * g1**4
excess_kurt = mu4 / (g2 - g1**2) ** 2 - 3
return mean, var, skew, excess_kurt
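A quick consistency check of the formulas implemented above, phrased against `scipy.stats.weibull_max` so the snippet runs standalone: the quantile function should invert the CDF, and the PDF should match a numerical derivative of the CDF.

```python
import numpy as np
from scipy import stats

# Consistency checks (example parameter values): ppf inverts cdf, and
# pdf agrees with a central-difference derivative of cdf.
c, mu, lam = 1.7, 2.0, 1.5
dist = stats.weibull_max(c, loc=mu, scale=lam)

x = np.linspace(mu - 5 * lam, mu - 1e-3, 200)
assert np.allclose(dist.ppf(dist.cdf(x)), x)

h = 1e-6
num_pdf = (dist.cdf(x + h) - dist.cdf(x - h)) / (2 * h)
assert np.allclose(num_pdf, dist.pdf(x), rtol=1e-4)
```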
4) Moments & properties#
A convenient trick is to work with

\[ Y = \mu - X. \]

Then \(Y\) is a Weibull (minimum) random variable with the same shape \(c\) and scale \(\lambda\).
Mean, variance, skewness, kurtosis#
Let \(g_r = \Gamma\!\left(1+\frac{r}{c}\right)\).
| Quantity | Value |
|---|---|
| Mean | \(\mathbb{E}[X]=\mu-\lambda\,g_1\) |
| Variance | \(\mathrm{Var}(X)=\lambda^2\,(g_2-g_1^2)\) |
| Skewness | \(\gamma_1(X)=-\dfrac{g_3-3g_2g_1+2g_1^3}{(g_2-g_1^2)^{3/2}}\) |
| Excess kurtosis | \(\gamma_2(X)=\dfrac{g_4-4g_3g_1+6g_2g_1^2-3g_1^4}{(g_2-g_1^2)^2}-3\) |
Because \(X\) is an affine transform of \(Y\), all moments exist for any \(c>0\).
MGF / characteristic function#
There is generally no simple closed-form MGF/CF for arbitrary \(c\), but we can still write them cleanly.
Write \(X=\mu-\lambda Z\) where \(Z\ge 0\) has the standard Weibull PDF \(c z^{c-1}e^{-z^c}\).
The MGF (when finite) is a Laplace transform of the standard Weibull law:

\[ M_X(t) = \mathbb{E}\big[e^{tX}\big] = e^{\mu t}\int_0^\infty e^{-\lambda t z}\,c\,z^{c-1}e^{-z^c}\,dz, \]

which always exists for \(t\ge 0\) (since \(X\le\mu\) implies \(e^{tX}\le e^{\mu t}\)).
The existence for \(t<0\) depends on how fast the left tail decays:
\(c>1\): MGF exists for all real \(t\).
\(c=1\): exists only for \(t>-1/\lambda\) (shifted negative exponential).
\(0<c<1\): diverges for any \(t<0\).
A useful series expansion around \(t=0\) (when it converges) comes from the moments of \(Z\):

\[ M_X(t) = e^{\mu t}\sum_{r=0}^{\infty}\frac{(-\lambda t)^{r}}{r!}\,\Gamma\!\left(1+\frac{r}{c}\right). \]
The characteristic function always exists:

\[ \varphi_X(t) = \mathbb{E}\big[e^{itX}\big] = e^{i\mu t}\int_0^\infty e^{-i\lambda t z}\,c\,z^{c-1}e^{-z^c}\,dz. \]
Entropy#
The differential entropy (independent of \(\mu\)) is

\[ H(X) = 1 + \ln\frac{\lambda}{c} + \gamma\left(1-\frac{1}{c}\right), \]

where \(\gamma\approx 0.57721\) is the Euler–Mascheroni constant.
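Since the differential entropy is \(-\mathbb{E}[\ln f(X)]\), the formula can be cross-checked by Monte Carlo (sample size and seed here are arbitrary):

```python
import numpy as np
from scipy import stats

# Entropy formula: H = 1 + ln(lam/c) + gamma * (1 - 1/c), independent of loc.
c, lam = 1.7, 1.5
h_formula = 1.0 + np.log(lam / c) + np.euler_gamma * (1.0 - 1.0 / c)

# Monte Carlo estimate of -E[log f(X)].
rng = np.random.default_rng(0)
dist = stats.weibull_max(c, scale=lam)
samples = dist.rvs(size=200_000, random_state=rng)
h_mc = -np.mean(dist.logpdf(samples))

assert np.isclose(h_mc, h_formula, atol=0.02)
```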
c_ex, loc_ex, scale_ex = 1.7, 2.0, 1.5
mean_th, var_th, skew_th, exkurt_th = weibull_max_moments(c_ex, loc=loc_ex, scale=scale_ex)
entropy_th = weibull_max_entropy(c_ex, scale=scale_ex)
dist = stats.weibull_max(c_ex, loc=loc_ex, scale=scale_ex)
mean_s, var_s, skew_s, exkurt_s = dist.stats(moments="mvsk")
entropy_s = dist.entropy()
{
"mean (theory)": mean_th,
"mean (scipy)": float(mean_s),
"var (theory)": var_th,
"var (scipy)": float(var_s),
"skew (theory)": skew_th,
"skew (scipy)": float(skew_s),
"excess_kurtosis (theory)": exkurt_th,
"excess_kurtosis (scipy)": float(exkurt_s),
"entropy (theory)": entropy_th,
"entropy (scipy)": float(entropy_s),
}
{'mean (theory)': 0.6616332462510139,
'mean (scipy)': 0.6616332462510139,
'var (theory)': 0.656673361303171,
'var (scipy)': 0.656673361303171,
'skew (theory)': -0.865023442264403,
'skew (scipy)': -0.8650234422644058,
'excess_kurtosis (theory)': 0.7723789795422564,
'excess_kurtosis (scipy)': 0.7723789795422564,
'entropy (theory)': 1.1125138955348604,
'entropy (scipy)': 1.1125138955348604}
5) Parameter interpretation#
We’ll use SciPy’s parameters: `c` (shape), `loc` \(=\mu\) (upper endpoint), and `scale` \(=\lambda\).
Location \(\mu\) (loc)#
Sets the finite upper endpoint.
The support is \((-\infty,\mu]\).
Shifting \(\mu\) shifts the entire distribution.
Scale \(\lambda\) (scale)#
Controls typical distance below the endpoint.
In the representation \(X=\mu-Y\), we have \(Y\sim\text{Weibull}(c,\lambda)\).
Larger \(\lambda\) produces heavier spread to the left (more mass far below \(\mu\)).
Shape \(c\) (c)#
Controls tail decay and behavior near the endpoint \(\mu\).
For \(0<c<1\), the density blows up at \(x\uparrow\mu\) (very sharp pile-up near the cap) and the left tail is relatively heavy.
For \(c=1\), \(\mu-X\) is exponential (memoryless in the distance-to-cap).
For \(c>1\), the density goes to 0 at \(x=\mu\) and the left tail decays faster.
# Shape changes (standardized loc=0, scale=1)
c_values = [0.6, 1.0, 1.8, 4.0]
x = np.linspace(-6, -1e-6, 800)
fig = go.Figure()
for c in c_values:
fig.add_trace(go.Scatter(x=x, y=weibull_max_pdf(x, c), mode="lines", name=f"c={c}"))
fig.update_layout(
title="weibull_max PDF shape for different c (loc=0, scale=1)",
xaxis_title="x",
yaxis_title="pdf(x)",
)
fig.show()
6) Derivations#
A standard and very reusable move is to transform to the distance from the endpoint:

\[ Y = \mu - X. \]

Then \(Y\) has the usual Weibull (minimum) PDF

\[ f_Y(y) = \frac{c}{\lambda}\left(\frac{y}{\lambda}\right)^{\!c-1} e^{-(y/\lambda)^c}, \qquad y\ge 0. \]
Expectation#
Compute \(\mathbb{E}[Y]\):

\[ \mathbb{E}[Y] = \int_0^\infty y\,\frac{c}{\lambda}\left(\frac{y}{\lambda}\right)^{\!c-1} e^{-(y/\lambda)^c}\,dy. \]

Substitute \(u=(y/\lambda)^c\) so \(y=\lambda u^{1/c}\) and \(dy=\frac{\lambda}{c}\,u^{1/c-1}\,du\). Then

\[ \mathbb{E}[Y] = \lambda\int_0^\infty u^{1/c} e^{-u}\,du = \lambda\,\Gamma\!\left(1+\frac{1}{c}\right). \]

So

\[ \mathbb{E}[X] = \mu - \mathbb{E}[Y] = \mu - \lambda\,\Gamma\!\left(1+\tfrac{1}{c}\right). \]
Variance#
Similarly,

\[ \mathbb{E}[Y^2] = \lambda^2\int_0^\infty u^{2/c} e^{-u}\,du = \lambda^2\,\Gamma\!\left(1+\frac{2}{c}\right). \]

Thus

\[ \mathrm{Var}(X) = \mathrm{Var}(Y) = \lambda^2\left[\Gamma\!\left(1+\tfrac{2}{c}\right)-\Gamma\!\left(1+\tfrac{1}{c}\right)^{2}\right]. \]
Likelihood#
For i.i.d. data \(x_1,\dots,x_n\) with \(x_i\le \mu\), the log-likelihood is

\[ \ell(c,\lambda,\mu) = n\ln\frac{c}{\lambda} + (c-1)\sum_{i=1}^{n}\ln\frac{\mu-x_i}{\lambda} - \sum_{i=1}^{n}\left(\frac{\mu-x_i}{\lambda}\right)^{\!c}. \]
A practical consequence: if \(\mu\) is known, you can fit \((c,\lambda)\) by fitting a standard Weibull to \(y_i=\mu-x_i\).
def weibull_max_loglik(x, c, loc=0.0, scale=1.0):
x = np.asarray(x, dtype=float)
return float(np.sum(weibull_max_logpdf(x, c, loc=loc, scale=scale)))
# Sanity check: log-likelihood matches SciPy's logpdf sum
c_true, loc_true, scale_true = 1.8, 2.0, 1.2
n = 5000
x_synth = stats.weibull_max(c_true, loc=loc_true, scale=scale_true).rvs(
size=n, random_state=rng
)
ll_manual = weibull_max_loglik(x_synth, c_true, loc=loc_true, scale=scale_true)
ll_scipy = float(
np.sum(stats.weibull_max(c_true, loc=loc_true, scale=scale_true).logpdf(x_synth))
)
ll_manual, ll_scipy
(-4263.328347635146, -4263.328347635146)
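The reduction to a standard Weibull can also be exercised at the fitting stage: with a known endpoint, fitting `weibull_max` with `floc=mu` should agree (up to optimizer tolerance) with fitting `weibull_min` to the distances \(y_i=\mu-x_i\). A sketch with synthetic data:

```python
import numpy as np
from scipy import stats

# With mu known, the weibull_max likelihood in (c, scale) is the same as the
# weibull_min likelihood for y = mu - x, so the two fits should agree.
rng = np.random.default_rng(7)
c_true, mu, lam = 1.8, 2.0, 1.2
x = stats.weibull_max(c_true, loc=mu, scale=lam).rvs(size=5000, random_state=rng)

c_max, _, s_max = stats.weibull_max.fit(x, floc=mu)
c_min, _, s_min = stats.weibull_min.fit(mu - x, floc=0.0)

# Agreement up to optimizer tolerance.
assert np.isclose(c_max, c_min, rtol=1e-2)
assert np.isclose(s_max, s_min, rtol=1e-2)
```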
7) Sampling & simulation (NumPy-only)#
Use the inverse CDF: with \(U\sim\mathrm{Uniform}(0,1)\),

\[ X = \mu - \lambda\,(-\ln U)^{1/c}. \]

A numerically convenient version uses the fact that \(E=-\ln U\sim\mathrm{Exp}(1)\):

\[ X = \mu - \lambda\,E^{1/c}. \]
This is fast, vectorized, and uses only NumPy.
def weibull_max_rvs_numpy(c, loc=0.0, scale=1.0, size=1, rng=None):
'''NumPy-only sampler via E ~ Exp(1), X = loc - scale * E**(1/c).'''
if rng is None:
rng = np.random.default_rng()
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0 or scale <= 0:
raise ValueError("c must be > 0 and scale must be > 0")
e = rng.exponential(scale=1.0, size=size) # Exp(1)
return loc - scale * e ** (1.0 / c)
# Monte Carlo check
c_mc, loc_mc, scale_mc = 0.7, 1.5, 0.8
n_mc = 200_000
x_mc = weibull_max_rvs_numpy(c_mc, loc=loc_mc, scale=scale_mc, size=n_mc, rng=rng)
mean_th, var_th, *_ = weibull_max_moments(c_mc, loc=loc_mc, scale=scale_mc)
mean_mc = float(np.mean(x_mc))
var_mc = float(np.var(x_mc, ddof=0))
{
"mean (MC)": mean_mc,
"mean (theory)": mean_th,
"var (MC)": var_mc,
"var (theory)": var_th,
}
{'mean (MC)': 0.48703560503710946,
'mean (theory)': 0.48734119515417307,
'var (MC)': 2.2115187094928475,
'var (theory)': 2.1931747543380187}
8) Visualization#
We’ll visualize:
the PDF and a Monte Carlo histogram
the CDF and an empirical CDF
using the NumPy-only sampler from above.
def ecdf(samples):
x = np.sort(np.asarray(samples))
y = np.arange(1, x.size + 1) / x.size
return x, y
c_viz, loc_viz, scale_viz = 1.2, 2.0, 1.0
n_viz = 80_000
samples = weibull_max_rvs_numpy(c_viz, loc=loc_viz, scale=scale_viz, size=n_viz, rng=rng)
# pick a finite plotting window using quantiles (support extends to -inf)
x_lo = float(np.quantile(samples, 0.001))
x_hi = float(np.quantile(samples, 0.999))
x_grid = np.linspace(x_lo, min(x_hi, loc_viz - 1e-6), 700)
pdf_grid = weibull_max_pdf(x_grid, c_viz, loc=loc_viz, scale=scale_viz)
cdf_grid = weibull_max_cdf(x_grid, c_viz, loc=loc_viz, scale=scale_viz)
# PDF + histogram
fig_pdf = go.Figure()
fig_pdf.add_trace(
go.Histogram(
x=samples,
histnorm="probability density",
nbinsx=80,
name="samples",
opacity=0.55,
)
)
fig_pdf.add_trace(go.Scatter(x=x_grid, y=pdf_grid, mode="lines", name="theory pdf"))
fig_pdf.update_layout(
title="weibull_max: histogram vs PDF",
xaxis_title="x",
yaxis_title="density",
)
fig_pdf.show()
# CDF + ECDF
x_ecdf, y_ecdf = ecdf(samples)
fig_cdf = go.Figure()
fig_cdf.add_trace(go.Scatter(x=x_grid, y=cdf_grid, mode="lines", name="theory cdf"))
fig_cdf.add_trace(
go.Scatter(
x=x_ecdf[::200],
y=y_ecdf[::200],
mode="markers",
name="empirical cdf",
marker=dict(size=4),
)
)
fig_cdf.update_layout(
title="weibull_max: empirical CDF vs CDF",
xaxis_title="x",
yaxis_title="F(x)",
)
fig_cdf.show()
9) SciPy integration (scipy.stats.weibull_max)#
SciPy’s distribution is scipy.stats.weibull_max.
- Shape is passed as `c`.
- Shift and scale are handled by `loc` and `scale`.

Mapping to our notation: \(\texttt{c}=c\), \(\texttt{loc}=\mu\), \(\texttt{scale}=\lambda\).
We’ll use it to evaluate pdf, cdf, sample via rvs, and fit via fit.
c_true, loc_true, scale_true = 1.5, 3.0, 0.9
dist = stats.weibull_max(c_true, loc=loc_true, scale=scale_true)
x = np.linspace(loc_true - 5 * scale_true, loc_true - 1e-6, 500)
pdf = dist.pdf(x)
cdf = dist.cdf(x)
samples_scipy = dist.rvs(size=5000, random_state=rng)
# MLE fit (free loc)
c_hat, loc_hat, scale_hat = stats.weibull_max.fit(samples_scipy)
# If you know the upper endpoint in advance, fixing loc can be much more stable.
c_hat_fix, loc_hat_fix, scale_hat_fix = stats.weibull_max.fit(samples_scipy, floc=loc_true)
{
"true": (c_true, loc_true, scale_true),
"fit (free loc)": (c_hat, loc_hat, scale_hat),
"fit (fixed loc=true loc)": (c_hat_fix, loc_hat_fix, scale_hat_fix),
}
{'true': (1.5, 3.0, 0.9),
'fit (free loc)': (1.5248013159497331,
2.9960156700152405,
0.8989292504413816),
'fit (fixed loc=true loc)': (1.5377865960944537, 3.0, 0.9045985622077737)}
10) Statistical use cases#
Hypothesis testing#
A common question: “Do these data plausibly come from a weibull_max model?”
If parameters are known a priori, a one-sample KS test against the specified CDF is straightforward.
If parameters are estimated from the data, the standard KS p-value is not calibrated; a parametric bootstrap is a typical fix.
Bayesian modeling#
In a Bayesian setting, you put priors on \((c,\lambda,\mu)\) and update with the likelihood. A simple educational starting point is a grid posterior over \((c,\lambda)\) when \(\mu\) is known.
Generative modeling#
Because it has a hard upper endpoint, weibull_max is useful as a building block in generative stories where observations are bounded above:
one-sided error models (\(\varepsilon\le 0\))
simulation of capped phenomena with tunable tail thickness
# (A) KS test + parametric bootstrap (illustration)
c0, loc0, scale0 = 1.4, 2.0, 1.0
n0 = 400
d0 = stats.weibull_max(c0, loc=loc0, scale=scale0)
data0 = d0.rvs(size=n0, random_state=rng)
# KS test when the null distribution is fully specified
ks_stat, ks_p = stats.kstest(data0, d0.cdf)
# Now pretend loc is known but (c, scale) are estimated and calibrate with a bootstrap
c_hat, loc_hat, scale_hat = stats.weibull_max.fit(data0, floc=loc0)
d_hat = stats.weibull_max(c_hat, loc=loc0, scale=scale_hat)
ks_obs, _ = stats.kstest(data0, d_hat.cdf)
B = 200
ks_boot = np.empty(B)
for b in range(B):
sim = d_hat.rvs(size=n0, random_state=rng)
c_b, _, scale_b = stats.weibull_max.fit(sim, floc=loc0)
d_b = stats.weibull_max(c_b, loc=loc0, scale=scale_b)
ks_boot[b] = stats.kstest(sim, d_b.cdf).statistic
p_boot = float(np.mean(ks_boot >= ks_obs))
{
"KS (known params) statistic": float(ks_stat),
"KS (known params) pvalue": float(ks_p),
"KS (fit params) statistic": float(ks_obs),
"bootstrap pvalue (approx)": p_boot,
}
{'KS (known params) statistic': 0.02737955313267515,
'KS (known params) pvalue': 0.9170449942340866,
'KS (fit params) statistic': 0.023270299995085675,
'bootstrap pvalue (approx)': 0.855}
# (B) Simple grid posterior for (c, scale) when loc is known
loc_known = 2.0
x = data0
y = loc_known - x
if np.any(y < 0):
raise ValueError("Data must satisfy x <= loc_known")
# Log-likelihood for y ~ Weibull_min(c, scale)
def weibull_min_loglik(y, c, scale):
y = np.asarray(y, dtype=float)
c = float(c)
scale = float(scale)
if c <= 0 or scale <= 0:
return -np.inf
if np.any(y < 0):
return -np.inf
# sum log(c/scale) + (c-1) log(y/scale) - (y/scale)^c
# handle y=0 safely (log 0); in continuous data this is usually not hit.
logy = np.where(y > 0, np.log(y), -np.inf)
return float(
y.size * (np.log(c) - np.log(scale))
+ (c - 1) * np.sum(logy - np.log(scale))
- np.sum((y / scale) ** c)
)
# Priors: independent log-normal on c and scale (mildly informative)
def lognormal_logpdf(z, mu, sigma):
z = np.asarray(z, dtype=float)
if np.any(z <= 0):
return -np.inf
return float(
-np.sum(np.log(z))
- z.size * np.log(sigma * np.sqrt(2 * np.pi))
- 0.5 * np.sum(((np.log(z) - mu) / sigma) ** 2)
)
c_grid = np.linspace(0.3, 4.0, 220)
scale_grid = np.linspace(0.2, 2.5, 220)
log_post = np.empty((c_grid.size, scale_grid.size))
for i, c_val in enumerate(c_grid):
for j, s_val in enumerate(scale_grid):
ll = weibull_min_loglik(y, c_val, s_val)
lp = lognormal_logpdf(np.array([c_val]), mu=np.log(1.2), sigma=0.7) + lognormal_logpdf(
np.array([s_val]), mu=np.log(1.0), sigma=0.7
)
log_post[i, j] = ll + lp
# Stabilize and normalize
log_post -= np.max(log_post)
post = np.exp(log_post)
post /= np.sum(post)
# MAP estimate
idx = np.unravel_index(np.argmax(post), post.shape)
c_map = float(c_grid[idx[0]])
scale_map = float(scale_grid[idx[1]])
# Posterior means (grid approximation)
c_mean = float(np.sum(c_grid[:, None] * post))
scale_mean = float(np.sum(scale_grid[None, :] * post))
fig = px.imshow(
post,
x=scale_grid,
y=c_grid,
origin="lower",
aspect="auto",
labels={"x": "scale", "y": "c", "color": "posterior mass"},
title="Grid posterior over (c, scale) with known loc",
)
fig.show()
{
"MAP (c, scale)": (c_map, scale_map),
"posterior mean (c, scale)": (c_mean, scale_mean),
}
{'MAP (c, scale)': (1.3812785388127855, 0.9876712328767123),
'posterior mean (c, scale)': (1.3877289041166823, 0.994075094693863)}
# (C) Generative example: one-sided errors bounded above
x = np.linspace(0, 10, 250)
true_curve = 2.0 + np.sin(x)
# noise <= 0 with tunable left tail
noise = weibull_max_rvs_numpy(c=1.3, loc=0.0, scale=0.35, size=x.size, rng=rng)
obs = true_curve + noise
fig = go.Figure()
fig.add_trace(go.Scatter(x=x, y=true_curve, mode="lines", name="true curve"))
fig.add_trace(
go.Scatter(
x=x,
y=obs,
mode="markers",
name="observations",
marker=dict(size=5, opacity=0.7),
)
)
fig.update_layout(
title="One-sided (upper-bounded) noise via weibull_max",
xaxis_title="x",
yaxis_title="y",
)
fig.show()
11) Pitfalls#
- Parameter constraints: `c > 0` and `scale > 0` are required. The support is \(x\le\mu\); if any data exceed \(\mu\), the likelihood is zero.
- Confusing `weibull_min` vs `weibull_max`: `weibull_min` lives on \([0,\infty)\) (after shifting), while `weibull_max` has a finite upper endpoint.
- Endpoint behavior for \(c<1\): the PDF diverges as \(x\uparrow\mu\). This is mathematically fine (integrable), but can look alarming in plots and can create very large values if you evaluate exactly at \(x=\mu\).
- Numerical under/overflow: for very negative \(x\), \(((\mu-x)/\lambda)^c\) can be huge; prefer `logpdf` when doing inference and use quantiles to choose plotting ranges.
- Fitting `loc`: when `loc` is free, MLE can be sensitive. If you know the physical cap/endpoint, fixing `loc` (via `floc=...`) can stabilize estimation.
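The support pitfall is easy to demonstrate with SciPy directly (example parameter values):

```python
import numpy as np
from scipy import stats

# Any observation above loc lies outside the support: pdf is 0 and
# logpdf is -inf, so the likelihood of such a data set is zero.
dist = stats.weibull_max(1.5, loc=2.0, scale=1.0)

assert dist.pdf(2.5) == 0.0            # x > loc: outside the support
assert dist.logpdf(2.5) == -np.inf
assert np.isfinite(dist.logpdf(1.5))   # x <= loc: inside the support
```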
12) Summary#
- `weibull_max` is a continuous distribution supported on \((-\infty,\mu]\) with a finite upper endpoint.
- The transform \(Y=\mu-X\) turns it into a standard Weibull (minimum) model on \([0,\infty)\).
- Mean/variance and higher standardized moments are cleanly expressed using Gamma functions \(\Gamma(1+r/c)\).
- Sampling is easy via the inverse CDF: \(X=\mu-\lambda(-\ln U)^{1/c}\).
- SciPy’s `scipy.stats.weibull_max` provides `pdf`, `cdf`, `rvs`, and `fit` (often more stable if `loc` is fixed when known).